<algorithm> (CRC or "cyclic redundancy code") A number derived
from, and stored or transmitted with, a block of data in order
to detect corruption. By recalculating the CRC and comparing
it to the value originally transmitted, the receiver can
detect some types of transmission errors.
A CRC is more complicated than a checksum. It is calculated
using polynomial division, implemented either with shifts and
exclusive ORs or with a table lookup (modulo 256 or 65536).
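As a rough sketch of the shift-and-exclusive-OR approach, the
following C fragment computes an 8-bit CRC one bit at a time.
The generator polynomial x^8 + x^2 + x + 1 (written 0x07) and
the zero initial value are arbitrary choices for illustration,
not part of any particular standard.

    #include <stdint.h>
    #include <stddef.h>

    /* Bit-by-bit 8-bit CRC using shifts and exclusive ORs. */
    uint8_t crc8(const uint8_t *data, size_t len)
    {
        uint8_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];            /* fold the next byte in */
            for (int bit = 0; bit < 8; bit++) {
                if (crc & 0x80)
                    /* top bit set: "subtract" the generator
                       (XOR is subtraction modulo 2) */
                    crc = (uint8_t)((crc << 1) ^ 0x07);
                else
                    crc = (uint8_t)(crc << 1);
            }
        }
        return crc;
    }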
The CRC is "redundant" in that it adds no information. A
single corrupted bit in the data is guaranteed to change the
calculated CRC, but certain patterns of multiple corrupted
bits may cancel each other out and leave the CRC unchanged.
CRCs treat blocks of input bits as coefficient-sets for
polynomials. E.g., binary 10100000 implies the polynomial:
1*x^7 + 0*x^6 + 1*x^5 + 0*x^4 + 0*x^3 + 0*x^2 + 0*x^1 + 0*x^0.
This is the "message polynomial". A second polynomial, with
constant coefficients, is called the "generator polynomial".
This is divided into the message polynomial, giving a quotient
and remainder. The coefficients of the remainder form the
bits of the final CRC. So, a generator polynomial of degree 32
(33 bits) is needed to generate a 32-bit CRC. The exact bit-set used
for the generator polynomial will naturally affect the CRC
that is computed.
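For example, dividing the message polynomial above by the
(arbitrarily chosen) generator 1011, i.e. x^3 + x + 1, modulo 2
leaves the remainder 110, so the 3-bit CRC of 10100000 under
that generator would be 110. (Practical CRC algorithms first
append as many zero bits to the message as the CRC is wide,
but the principle is the same.)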
Most CRC implementations seem to operate 8 bits at a time by
building a table of 256 entries, representing all 256 possible
8-bit byte combinations, and determining the effect that each
byte will have. CRCs are then computed using an input byte to
select a 16- or 32-bit value from the table. This value is
then used to update the CRC.
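A hedged sketch of that table-driven scheme in C, using the
reflected CRC-32 polynomial 0xEDB88320 (the bit-reversed form
of the polynomial used by Ethernet and zip); the function names
are illustrative only, and make_crc_table() must be called once
before crc32() is used.

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t crc_table[256];

    /* Build the 256-entry table: entry i records the effect that
       the byte value i has on the CRC, computed bit by bit. */
    static void make_crc_table(void)
    {
        for (uint32_t i = 0; i < 256; i++) {
            uint32_t c = i;
            for (int bit = 0; bit < 8; bit++)
                c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
            crc_table[i] = c;
        }
    }

    /* Process the input a byte at a time, using each byte to
       select a 32-bit table entry that updates the running CRC. */
    uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;   /* conventional start value */
        for (size_t i = 0; i < len; i++)
            crc = (crc >> 8) ^ crc_table[(crc ^ data[i]) & 0xFF];
        return crc ^ 0xFFFFFFFFu;     /* conventional final XOR */
    }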
Ethernet packets have a 32-bit CRC. Many disk formats
include a CRC at some level.
(1997-08-02)